


Creators/Authors contains: "Diao, Chunyuan"


  1. Free, publicly-accessible full text available November 1, 2024
  2. Free, publicly-accessible full text available November 1, 2024
  3. Free, publicly-accessible full text available August 1, 2024
  4. Detecting crop phenology with satellite time series is important for characterizing agroecosystem energy-water-carbon fluxes, managing farming practices, and predicting crop yields. Despite advances in satellite-based crop phenological retrievals, interpreting the retrieved characteristics in the context of on-the-ground crop phenological events remains a long-standing hurdle. In recent years, the emergence of near-surface phenology cameras (e.g., PhenoCams), along with satellite imagery of both high spatial and temporal resolution (e.g., PlanetScope imagery), has largely facilitated direct comparisons of retrieved characteristics to visually observed crop stages for phenological interpretation and validation. The goal of this study is to systematically assess near-surface PhenoCam and high-resolution PlanetScope time series in reconciling sensor- and ground-based crop phenological characterizations. Taking two critical crop stages (i.e., the crop emergence and maturity stages) as an example, we retrieved diverse phenological characteristics from both PhenoCam and PlanetScope imagery for a range of agricultural sites across the United States. The curvature-based Greenup and Gu-based Upturn estimates showed good congruence with the visually observed crop emergence stage (RMSE of about 1 week, bias of about 0–9 days, and R² of about 0.65–0.75). The threshold- and derivative-based End of greenness falling Season (EOS) estimates reconciled well with visual crop maturity observations (RMSE of about 5–10 days, bias of about 0–8 days, and R² of about 0.6–0.75). The concordance among PlanetScope, PhenoCam, and visual phenology demonstrates the potential to interpret fine-scale sensor-derived phenological characteristics in the context of physiologically well-characterized crop phenological events, paving the way toward formal protocols for bridging ground- and satellite-based phenological characterization.
  5. Dense time-series remote sensing data with detailed spatial information are highly desired for monitoring dynamic earth systems. Due to sensor tradeoffs, most remote sensing systems cannot provide images with both high spatial and high temporal resolution. Spatiotemporal image fusion models offer a feasible way to generate such imagery, yet existing fusion methods are limited in predicting rapid and/or transient phenological changes. Additionally, spatiotemporal fusion research lacks a systematic approach to assessing and understanding how varying levels of temporal phenological change affect fusion results. The objective of this study is to develop an innovative hybrid deep learning model that can effectively and robustly fuse satellite imagery of various spatial and temporal resolutions. The proposed model integrates two types of networks: a super-resolution convolutional neural network (SRCNN) and a long short-term memory (LSTM) network. The SRCNN enhances the coarse images by restoring degraded spatial details, while the LSTM learns and extracts temporal change patterns from the time-series images. To systematically assess the effects of varying levels of phenological change, we identify image phenological transition dates and design three temporal change scenarios representing rapid, moderate, and minimal phenological change. The hybrid deep learning model, alongside three benchmark fusion models, is assessed under these scenarios. Results indicate that the hybrid model yields significantly better results when rapid or moderate phenological changes are present. It holds great potential for generating high-quality time series of both high spatial and temporal resolution, which can further benefit studies of terrestrial system dynamics. This approach to assessing the effects of phenological change will also help clarify the strengths and weaknesses of current and future fusion models.
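The threshold-based retrievals described in item 4 can be illustrated with a minimal sketch: given a vegetation-index (e.g., NDVI) time series, a start-of-season and end-of-season date are taken where the curve crosses a fixed fraction of the seasonal amplitude on the rising and falling limbs. This is a simplified, hypothetical illustration of the general technique, not the curvature- or Gu-based retrievals used in the paper.

```python
import numpy as np

def threshold_phenology_dates(doy, vi, threshold=0.5):
    """Estimate start-of-season (SOS) and end-of-season (EOS) days of year
    from a vegetation-index time series using an amplitude threshold.

    doy: 1-D array of observation days of year (sorted ascending).
    vi:  1-D array of vegetation-index values (e.g., NDVI), same length.
    threshold: fraction of the seasonal amplitude (0.5 = 50% amplitude).
    Illustrative sketch only; real retrievals typically fit a smooth
    (e.g., double-logistic) curve to the observations first.
    """
    doy = np.asarray(doy, dtype=float)
    vi = np.asarray(vi, dtype=float)
    vmin, vmax = vi.min(), vi.max()
    level = vmin + threshold * (vmax - vmin)
    peak = int(np.argmax(vi))
    # SOS: first observation at or above the threshold on the rising limb.
    sos_idx = int(np.argmax(vi[: peak + 1] >= level))
    # EOS: first observation at or below the threshold on the falling limb.
    falling = vi[peak:]
    below = falling <= level
    eos_idx = peak + (int(np.argmax(below)) if below.any() else len(falling) - 1)
    return doy[sos_idx], doy[eos_idx]
```

In practice the raw series would be smoothed or curve-fit before thresholding, since noise in individual observations can shift the crossing dates by several days.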
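The spatiotemporal fusion task in item 5 can be sketched with a classic hand-crafted baseline: predict the fine-resolution image at a target date by adding the coarse-scale temporal change (upsampled to fine resolution) to a fine base image. The hybrid deep model replaces these hand-crafted steps, with the SRCNN learning the spatial enhancement and the LSTM learning the temporal change; the function names and nearest-neighbor upsampling below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def upsample_nearest(coarse, factor):
    """Nearest-neighbor upsampling of a 2-D coarse image by an integer factor."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def fuse_prediction(fine_t1, coarse_t1, coarse_t2, factor):
    """Predict the fine-resolution image at date t2 from a fine image at t1
    and coarse images at t1 and t2, by adding the upsampled coarse-scale
    temporal change to the fine base image. A classic fusion baseline,
    not the hybrid SRCNN-LSTM model described in the abstract.
    """
    delta = upsample_nearest(coarse_t2 - coarse_t1, factor)  # temporal change
    return fine_t1 + delta
```

Baselines of this form break down exactly where the abstract says existing methods struggle: when phenological change between t1 and t2 is rapid or transient, the coarse-scale difference no longer captures the fine-scale spatial pattern of change, which is the gap the learned model targets.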